Search results for: Quasi-Newton method
Number of results: 1,708,814. Filter results by year:
Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper is a scrutiny of the application of diverse learning methods to the speed of convergence in neural networks. To this end, we first introduce a perceptron method based on artificial neural networks, which has been applied to solving a non-singula...
A new family of limited-memory variationally-derived variable metric or quasi-Newton methods for unconstrained minimization is given. The methods have quadratic termination property and use updates, invariant under linear transformations. Some encouraging numerical experience is reported.
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm is described that uses an intermediate (and variable) amount of storage and exhibits correspondingly intermediate convergence, that is, generally better than that observed for conjugate gradient...
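The storage/convergence trade-off described in this abstract is the idea behind limited-memory quasi-Newton methods such as L-BFGS, which keep only the last m curvature pairs instead of a dense n×n matrix. A minimal NumPy sketch of the standard two-loop recursion follows; the function names, the fixed memory size, and the Armijo backtracking line search are illustrative choices, not details taken from the cited paper:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the implicit inverse-Hessian
    approximation to grad using only the stored (s, y) pairs."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Initial scaling H0 = gamma * I (a common heuristic).
    gamma = (np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
             if s_list else 1.0)
    r = gamma * q
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r  # descent direction

def backtrack(f, x, d, g, step=1.0, shrink=0.5, c=1e-4):
    """Simple Armijo backtracking line search."""
    while f(x + step * d) > f(x) + c * step * np.dot(g, d):
        step *= shrink
    return step

def lbfgs(f, f_grad, x0, m=5, iters=100, tol=1e-8):
    """Minimal L-BFGS loop storing only the last m curvature pairs."""
    x = x0.astype(float)
    s_list, y_list = [], []
    g = f_grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, s_list, y_list)
        t = backtrack(f, x, d, g)
        x_new = x + t * d
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-12:  # keep only positive-curvature pairs
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# Toy example: minimize the quadratic f(x) = 0.5 x'Ax - b'x.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = lbfgs(f, lambda x: A @ x - b, np.zeros(3))
# x_star should approximate np.linalg.solve(A, b)
```

With m small the per-iteration cost and storage are O(mn) rather than the O(n^2) of a full quasi-Newton update, which is exactly the intermediate regime the abstract describes.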
Ensemble learning with output from multiple supervised and unsupervised models aims to improve the classification accuracy of supervised model ensemble by jointly considering the grouping results from unsupervised models. In this paper we cast this ensemble task as an unconstrained probabilistic embedding problem. Specifically, we assume both objects and classes/clusters have latent coordinates...
The smoothness-constrained least-squares method is widely used for two-dimensional (2D) and three-dimensional (3D) inversion of apparent resistivity data sets. The Gauss–Newton method that recalculates the Jacobian matrix of partial derivatives for all iterations is commonly used to solve the least-squares equation. The quasi-Newton method has also been used to reduce the computer time. In this...
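The Gauss-Newton step referred to in this abstract solves a linearized least-squares problem at each iteration using the Jacobian of the residuals. A minimal sketch on a toy curve-fitting problem follows; this is a generic Gauss-Newton loop, not the smoothness-constrained resistivity inversion itself, and the exponential model and parameter names are illustrative:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50, tol=1e-10):
    """Gauss-Newton: at each iteration solve the normal equations
    (J^T J) delta = -J^T r, where J is recomputed at the current x."""
    x = x0.astype(float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + delta
        if np.linalg.norm(delta) < tol:
            break
    return x

# Toy example: fit y = a * exp(b * t) to noiseless synthetic data.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    # Columns: d(residual)/da and d(residual)/db.
    return np.column_stack([e, a * t * e])

p_hat = gauss_newton(residual, jacobian, np.array([1.0, -1.0]))
# p_hat should approximate [a_true, b_true]
```

Recomputing J at every iteration, as here, is the expensive step the abstract mentions; quasi-Newton variants instead update an approximation to J (or to J^T J) between iterations to save that cost.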
[Chart: number of search results per year]